Lung cancer has been one of the most prevalent diseases in recent years. According to research in the field, more than 200,000 cases are identified in the United States every year. Uncontrolled proliferation and growth of lung cells leads to the formation of malignant tumors. Recently, deep learning algorithms, especially convolutional neural networks (CNNs), have become an advanced way to diagnose diseases automatically. The purpose of this paper is to review different models that lead to different accuracy and sensitivity in the diagnosis of early-stage lung cancer, and to help physicians and researchers in this field. The main aim of this work is to identify the challenges of deep-learning-based lung cancer detection. The survey was compiled systematically, combining systematic mapping and literature review, covering 32 conference and journal articles in the field from 2016 to 2021. After analyzing and reviewing the articles, the questions posed in them are answered. Owing to its complete and systematic review of the relevant articles, this study surpasses other review articles in the field.
translated by 谷歌翻译
Differentiable Architecture Search (DARTS) has attracted considerable attention as a gradient-based Neural Architecture Search (NAS) method. Since the introduction of DARTS, there has been little work done on adapting the action space based on state-of-the-art architecture design principles for CNNs. In this work, we aim to address this gap by incrementally augmenting the DARTS search space with micro-design changes inspired by ConvNeXt and studying the trade-off between accuracy, evaluation layer count, and computational cost. To this end, we introduce the Pseudo-Inverted Bottleneck conv block, intended to reduce the computational footprint of the inverted bottleneck block proposed in ConvNeXt. Our proposed architecture is much less sensitive to evaluation layer count and significantly outperforms a DARTS network of similar size at layer counts as small as 2. Furthermore, with fewer layers, not only does it achieve higher accuracy with lower GMACs and parameter count, but GradCAM comparisons also show that our network is able to better detect distinctive features of target objects compared to DARTS.
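For context, the cost that the Pseudo-Inverted Bottleneck aims to slim down can be tallied for the standard ConvNeXt inverted-bottleneck block (depthwise 7x7 conv, 1x1 expansion by 4x, 1x1 projection back); the block structure is ConvNeXt's published design, while the feature-map dimensions below are purely illustrative:

```python
def convnext_block_macs(c, h, w, k=7, expand=4):
    """MACs of one ConvNeXt inverted-bottleneck block on a c x h x w feature map:
    depthwise k x k conv, then 1x1 expansion to expand*c, then 1x1 projection."""
    dw = c * h * w * k * k            # depthwise conv: k*k MACs per output element
    pw1 = h * w * c * (expand * c)    # 1x1 expansion to expand*c channels
    pw2 = h * w * (expand * c) * c    # 1x1 projection back to c channels
    return dw + pw1 + pw2

macs = convnext_block_macs(c=96, h=56, w=56, k=7)
print(f"{macs / 1e6:.1f} MMACs")  # 246.0 MMACs for these illustrative dimensions
```

Note that the two pointwise convolutions dominate (they scale with the expansion ratio), which is the term a reduced-footprint variant of the block would target.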
Recent advances in language modeling have enabled new conversational systems. In particular, it is often desirable for people to make choices among specified options when using such systems. We address the problem of reference resolution when people use natural expressions to choose between real-world entities. For example, given the choice `Should we make a Simnel cake or a Pandan cake?', a natural response from a non-expert may be indirect: `let's make the green one'. Reference resolution with natural expressions has been little studied, so robustly understanding such language has large potential for improving naturalness in dialog, recommendation, and search systems. We create AltEntities (Alternative Entities), a new public dataset of entity pairs and utterances, and develop models for the disambiguation problem. Consisting of 42K indirect referring expressions across three domains, it enables for the first time the study of how large language models can be adapted to this task. We find they achieve 82%-87% accuracy in realistic settings, which, while reasonable, also invites further advances.
The problem of reversing the compilation process, decompilation, is an important tool in reverse engineering of computer software. Recently, researchers have proposed using techniques from neural machine translation to automate the decompilation process. Although such techniques hold the promise of targeting a wider range of source and assembly languages, to date they have primarily targeted C code. In this paper we argue that existing neural decompilers have achieved higher accuracy at the cost of requiring language-specific domain knowledge, such as tokenizers and parsers to build an abstract syntax tree (AST) for the source language, which increases the overhead of supporting new languages. We explore a different tradeoff that, to the extent possible, treats the assembly and source languages as plain text, and show that this allows us to build a decompiler that is easily retargetable to new languages. We evaluate our prototype decompiler, Beyond The C (BTC), on Go, Fortran, OCaml, and C, and examine the impact of parameters such as tokenization and training data selection on the quality of decompilation, finding that it achieves comparable decompilation results to prior work in neural decompilation with significantly less domain knowledge. We will release our training data, trained decompilation models, and code to help encourage future research into language-agnostic decompilation.
Numerous models have tried to effectively embed knowledge graphs in low dimensions. Among the state-of-the-art methods, Graph Neural Network (GNN) models provide structure-aware representations of knowledge graphs. However, they often use the information of relations and their interactions with entities inefficiently. Moreover, most state-of-the-art knowledge graph embedding models suffer from scalability issues because they assign high-dimensional embeddings to entities and relations. To address these limitations, we propose a scalable general knowledge graph encoder that adaptively incorporates a powerful tensor decomposition method into the aggregation function of RGCN, a well-known relational GNN model. Specifically, the parameters of a low-rank core projection tensor, used to transform neighborhood entities in the encoder, are shared across relations to benefit from multi-task learning and incorporate relation information effectively. In addition, we propose a low-rank estimation of the core tensor using CP decomposition to compress the model, which is also applicable, as a regularization method, to other similar linear models. We evaluate our model on knowledge graph completion as a common downstream task. We train our model using a new loss function based on contrastive learning, which relieves the training limitation of the 1-N method on huge graphs. We improve RGCN performance on FB15k-237 by 0.42% with considerably lower embedding dimensionality.
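The shared low-rank core idea can be sketched as a CP-factorized relation projection: the relation-specific matrix W_r is never materialized, only its rank-1 factors, which are shared across all relations. All names and dimensions below are illustrative, not the paper's implementation:

```python
import random

random.seed(0)
d, R, rank = 4, 3, 2  # embedding dim, number of relations, CP rank (illustrative)

def mat(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

# CP factors of the core tensor: only R*rank + 2*d*rank parameters in total,
# instead of R*d*d for independent per-relation projection matrices.
A = mat(R, rank)  # relation factors (shared core couples all relations)
B = mat(d, rank)  # output-side factors
C = mat(d, rank)  # input-side factors

def project(r, h):
    """Apply W_r = sum_c A[r][c] * outer(B[:,c], C[:,c]) to vector h,
    without ever materializing the d x d matrix W_r."""
    out = [0.0] * d
    for c in range(rank):
        s = A[r][c] * sum(C[i][c] * h[i] for i in range(d))  # A[r][c] * (C[:,c] . h)
        for i in range(d):
            out[i] += s * B[i][c]
    return out

# Sanity check against the explicitly materialized W_r for one relation.
r, h = 1, [0.5, -1.0, 2.0, 0.3]
W_r = [[sum(A[r][c] * B[i][c] * C[j][c] for c in range(rank)) for j in range(d)]
       for i in range(d)]
explicit = [sum(W_r[i][j] * h[j] for j in range(d)) for i in range(d)]
fast = project(r, h)
print(max(abs(a - b) for a, b in zip(explicit, fast)))  # ~0 up to float error
```

The factorized form also makes the memory cost per relation constant in `rank`, which is what enables lower-dimensional embeddings at scale.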
Gaussian Mixture Models (GMMs) are among the most potent parametric density estimators based on the kernel model and find application in many scientific domains. In recent years, with the dramatic growth of data sources, typical machine learning algorithms, e.g., Expectation Maximization (EM), encounter difficulty with high-dimensional and streaming data. Moreover, complicated densities often demand a large number of Gaussian components. This paper proposes a fast online parameter estimation algorithm for GMMs using first-order stochastic optimization. The approach provides a framework for coping with the challenges GMMs face on high-dimensional streaming data and complex densities by leveraging a flexibly-tied factorization of the covariance matrix. A new stochastic manifold optimization algorithm that preserves orthogonality is introduced and used alongside well-known Euclidean-space numerical optimization. Numerous empirical results on both synthetic and real datasets justify the effectiveness of our proposed stochastic method over EM-based methods in terms of a better-converged likelihood maximum, fewer epochs needed for convergence, and less time per epoch.
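A minimal sketch of the first-order online idea, assuming a 1-D two-component mixture and updating only the means (the paper additionally learns mixing weights and flexibly-tied covariances, with a manifold step for the orthogonal factor). Each sample triggers one stochastic gradient-ascent step on the log-likelihood:

```python
import math
import random

random.seed(1)

# Stream of 1-D samples from a two-component mixture (the "true" model).
def sample():
    return random.gauss(-2.0, 0.5) if random.random() < 0.4 else random.gauss(3.0, 1.0)

# 2-component GMM: means learned online; weights and variance held fixed here.
means, var, w = [0.0, 1.0], 1.0, [0.5, 0.5]
lr = 0.05

for _ in range(20000):
    x = sample()
    # responsibilities: the E-step quantities, computed for this one sample only
    p = [wi * math.exp(-(x - m) ** 2 / (2 * var)) for wi, m in zip(w, means)]
    z = sum(p)
    r = [pi / z for pi in p]
    # one stochastic step: d/dm_k log p(x) = r_k * (x - m_k) / var
    for k in range(2):
        means[k] += lr * r[k] * (x - means[k]) / var

print([round(m, 1) for m in means])  # means drift toward the true centers -2 and 3
```

Unlike batch EM, no pass over a stored dataset is needed: each sample is consumed once and discarded, which is what makes the streaming setting tractable.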
The process of screening molecules for desirable properties is a key step in several applications, ranging from drug discovery to material design. During the process of drug discovery specifically, protein-ligand docking, or chemical docking, is a standard in-silico scoring technique that estimates the binding affinity of molecules with a specific protein target. Recently, however, as the number of virtual molecules available to test has rapidly grown, these classical docking algorithms have created a significant computational bottleneck. We address this problem by introducing Deep Surrogate Docking (DSD), a framework that applies deep-learning-based surrogate modeling to accelerate the docking process substantially. DSD can be interpreted as a formalism of several earlier surrogate prefiltering techniques, adding novel metrics and practical training strategies. Specifically, we show that graph neural networks (GNNs) can serve as fast and accurate estimators of classical docking algorithms. Additionally, we introduce FiLMv2, a novel GNN architecture which we show outperforms existing state-of-the-art GNN architectures, attaining more accurate and stable performance by allowing the model to filter out irrelevant information from data more efficiently. Through extensive experimentation and analysis, we show that the DSD workflow combined with the FiLMv2 architecture provides a 9.496x speedup in molecule screening with a <3% recall error rate on an example docking task. Our open-source code is available at https://github.com/ryienh/graph-dock.
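The two-stage surrogate prefiltering workflow can be illustrated with a toy model; the Gaussian scores, noise level, and cutoff fractions below are illustrative stand-ins for the GNN surrogate and the real docking oracle, not the paper's setup:

```python
import random

random.seed(0)

# Mock library: "true" docking scores (lower = stronger binding), expensive to compute.
true_score = {f"mol{i}": random.gauss(0.0, 1.0) for i in range(10000)}

# A surrogate (stand-in for the GNN) predicts the score cheaply but noisily.
def surrogate(mol):
    return true_score[mol] + random.gauss(0.0, 0.3)

# Stage 1: score everything with the cheap surrogate, keep the best-looking 10%.
ranked = sorted(true_score, key=surrogate)
kept = set(ranked[: len(ranked) // 10])

# Stage 2: the expensive docking oracle runs only on the kept subset.
# Recall: fraction of the true top-1% hits that survived the prefilter.
hits = set(sorted(true_score, key=true_score.get)[: len(true_score) // 100])
recall = len(hits & kept) / len(hits)
print(f"recall of true top-1% hits: {recall:.2f}")
```

The speedup comes from running the oracle on 10% of the library; the recall error is the price paid for the surrogate's noise, which is why surrogate accuracy (the role of FiLMv2) directly controls the trade-off.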
Society's growing reliance on social media for news and information, with user-generated content, has amplified the influence of unreliable sources and fake content, which muddies public discourse and lessens trust in the media. Verifying the credibility of such information is a difficult task that is prone to confirmation bias, which has motivated the development of algorithmic techniques to distinguish fake news from real news. However, most existing methods are challenging to interpret, making it difficult to establish trust in predictions, and they make unrealistic assumptions in many real-world settings (e.g., the availability of audiovisual features or provenance). In this work, we focus on fake news detection of textual content using interpretable features and methods. In particular, we develop a deep probabilistic model that combines a variational autoencoder and bidirectional Long Short-Term Memory (LSTM) networks with semantic topic-related features inferred from a Bayesian mixture model. Extensive experimental studies on three real-world datasets demonstrate that our model achieves performance comparable to state-of-the-art competing models while facilitating model interpretation from the learned topics. Finally, we conduct a model ablation study to justify the effectiveness and accuracy of integrating neural embeddings and topic features, both quantitatively, by evaluating performance, and qualitatively, through separability in lower-dimensional embeddings.
In recent years, the video broadcasting industry has been growing significantly, particularly in delivering personalized content to end users. While video broadcasting keeps growing, video advertising has become a key marketing tool for delivering targeted messages directly to viewers. Unfortunately, however, a key problem for broadcast television is that TV advertisements target a broad audience and therefore lack user-specific and personalized ad content. In this paper, we propose a deep-learning-based ad insertion system and briefly describe our methodology as well as the architecture of the ad placement system we designed for delivering video-on-demand (VOD) and live broadcast TV content over the MMT streaming protocol. The aim of our paper is to show how targeted, personalized, and user-specific advertising services can be enabled on future 5G MEC platforms, which in turn has high potential to increase advertising revenue for the mobile operator industry.
Federated learning (FL) is a decentralized approach that enables hospitals to collaboratively learn a model without sharing private patient data for training. In FL, participating hospitals periodically exchange training results, rather than training samples, with a central server. However, access to model parameters or gradients can expose private training data samples. To address this challenge, we adopt secure multiparty computation (SMC) to build a privacy-preserving federated learning framework. In our proposed method, hospitals are divided into clusters. After local training, each hospital splits its model weights into shares distributed among the other hospitals in the same cluster, so that no single hospital can retrieve another hospital's weights on its own. Then, every hospital sums the shares it has received and sends the result to the central server. Finally, the central server aggregates the results, retrieves the average weights of the model, and updates the model without ever accessing the individual hospitals' weights. We conduct experiments on the publicly available repository The Cancer Genome Atlas (TCGA). We compare the performance of our proposed framework with differential privacy and federated averaging as baselines. The results show that, compared with differential privacy, our framework achieves higher accuracy with no risk of privacy leakage, at the cost of higher communication overhead.
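The cluster-level aggregation described above can be sketched with additive secret sharing: each hospital splits a weight into random shares that sum to it, peers sum the shares they receive, and the server only ever sees sums. The field size, fixed-point scaling, and names below are illustrative, not the paper's implementation:

```python
import random

PRIME = 2**61 - 1  # field size for share arithmetic (illustrative)
SCALE = 10**6      # fixed-point scaling so float weights fit field arithmetic

def make_shares(weight, n):
    """Split one fixed-point weight into n additive shares summing to it mod PRIME."""
    fixed = int(round(weight * SCALE)) % PRIME
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((fixed - sum(shares)) % PRIME)
    return shares

def decode(total, n_parties):
    """Recover the average weight from the grand total of all shares."""
    value = total % PRIME
    if value > PRIME // 2:  # map back to the signed range
        value -= PRIME
    return value / SCALE / n_parties

# Three hospitals in one cluster, each with a locally trained scalar weight.
local_weights = [0.82, -0.35, 0.51]
n = len(local_weights)

# Each hospital splits its weight and hands one share to every peer.
all_shares = [make_shares(w, n) for w in local_weights]

# Hospital j sums the shares it received (column j) and sends only that sum.
sums_sent_to_server = [sum(all_shares[i][j] for i in range(n)) for j in range(n)]

# The server adds the per-hospital sums and recovers only the average weight.
avg = decode(sum(sums_sent_to_server), n)
print(round(avg, 6))  # equals (0.82 - 0.35 + 0.51) / 3
```

No single hospital's weight is recoverable from any one share or partial sum, yet the server obtains the exact federated average, which is why accuracy matches plain averaging while the extra share exchange shows up as communication overhead.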